Nonlinear Autoassociation Is Not Equivalent to PCA
Authors

Abstract
A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building reconstruction error surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden-unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
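The contrast drawn in the abstract can be made concrete with a small numerical comparison. The sketch below is a minimal numpy-only illustration, not the paper's original experimental setup: every architectural choice (layer sizes, tanh units, learning rate, the parabola dataset) is an assumption made for demonstration. It computes the best rank-1 *linear* reconstruction via PCA on data lying on a nonlinear 1-D manifold, then trains a small autoassociator with nonlinear hidden layers and a 1-D bottleneck by plain gradient descent on the same data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data on a parabola: a nonlinear 1-D manifold embedded in 2-D.
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, t ** 2])
X = X - X.mean(axis=0)  # center, as PCA assumes

# PCA: best rank-1 LINEAR reconstruction, via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_err = np.mean((X - (X @ Vt[:1].T) @ Vt[:1]) ** 2)

# Autoassociator 2 -> tanh(8) -> 1 -> tanh(8) -> 2 (illustrative sizes).
def init(n_in, n_out):
    return rng.normal(scale=0.5, size=(n_in, n_out)), np.zeros(n_out)

W1, b1 = init(2, 8)   # encoder hidden layer (nonlinear)
W2, b2 = init(8, 1)   # 1-D bottleneck code (linear)
W3, b3 = init(1, 8)   # decoder hidden layer (nonlinear)
W4, b4 = init(8, 2)   # linear output layer

lr = 0.05
losses = []
for _ in range(5000):
    # Forward pass.
    H1 = np.tanh(X @ W1 + b1)
    C = H1 @ W2 + b2
    H2 = np.tanh(C @ W3 + b3)
    Y = H2 @ W4 + b4
    losses.append(np.mean((Y - X) ** 2))
    # Backward pass (mean squared reconstruction error).
    G = 2 * (Y - X) / len(X)            # dL/dY
    dW4, db4 = H2.T @ G, G.sum(0)
    G = (G @ W4.T) * (1 - H2 ** 2)      # back through decoder tanh
    dW3, db3 = C.T @ G, G.sum(0)
    G = G @ W3.T                        # back through linear bottleneck
    dW2, db2 = H1.T @ G, G.sum(0)
    G = (G @ W2.T) * (1 - H1 ** 2)      # back through encoder tanh
    dW1, db1 = X.T @ G, G.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2),
                 (W3, dW3), (b3, db3), (W4, dW4), (b4, db4)):
        p -= lr * g

# Final reconstruction error of the trained nonlinear autoassociator.
H1 = np.tanh(X @ W1 + b1)
Y = np.tanh((H1 @ W2 + b2) @ W3 + b3) @ W4 + b4
ae_err = np.mean((Y - X) ** 2)

print(f"PCA rank-1 reconstruction error:   {pca_err:.4f}")
print(f"Nonlinear autoassociator error:    {ae_err:.4f}")
```

Because both models compress to one dimension, any gap between the two errors comes from the nonlinearity alone: the autoassociator can bend its 1-D code along the parabola, while PCA is restricted to a straight line. Convergence of plain gradient descent depends on the seed and learning rate, so the nonlinear error is not guaranteed to end below PCA's on every run.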
Similar articles
Free Vibration Analysis of Quintic Nonlinear Beams using Equivalent Linearization Method with a Weighted Averaging
In this paper, the equivalent linearization method with a weighted averaging proposed by Anh (2015) is applied to analyze the transverse vibration of quintic nonlinear Euler-Bernoulli beams subjected to axial loads. The proposed method does not require a small parameter in the equation, which is difficult to find for nonlinear problems. The approximate solutions are harmonic oscillations, which...
Nonlinear principal components analysis: introduction and application.
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal variables and that it can handle and discover n...
An Infinitesimal Probabilistic Model for Principal Component Analysis of Manifold Valued Data
We provide a probabilistic and infinitesimal view of how the principal component analysis procedure (PCA) can be generalized to analysis of nonlinear manifold valued data. Starting with the probabilistic PCA interpretation of the Euclidean PCA procedure, we show how PCA can be generalized to manifolds in an intrinsic way that does not resort to linearization of the data space. The underlying pr...
Nonexpansive mappings on complex C*-algebras and their fixed points
A normed space $\mathfrak{X}$ is said to have the fixed point property if every nonexpansive mapping $T : E \longrightarrow E$ on a nonempty bounded closed convex subset $E$ of $\mathfrak{X}$ has a fixed point. In this paper, we first show that if $X$ is a locally compact Hausdorff space then the following are equivalent: (i) $X$ is an infinite set, (ii) $C_0(X)$ is infinite dimensional, (...
Evaluation of the SEAOC/UBC97 Provisions for the Tall Base-Isolated Structures
Base isolation systems are among the passive control devices that have been used over the last three decades to limit the seismic-induced response of structures. In this regard, the Uniform Building Code (UBC) has incorporated a special section for the seismic design of base-isolated structures since its 1991 edition. Due to the importance of the behavior of these structures unde...
Journal: Neural Computation
Volume 12, Issue 3
Pages: -
Published: 2000